Concept learning using complexity regularization

Authors

  • Gábor Lugosi
  • Kenneth Zeger
Abstract

We apply the method of complexity regularization to learn concepts from large concept classes. The method is shown to automatically find a good balance between the approximation error and the estimation error. In particular, the error probability of the obtained classifier is shown to decrease as O(√(log n/n)) to the achievable optimum, for large nonparametric classes of distributions, as the sample size n grows. We also show that if the Bayes error probability is zero and the Bayes rule is in a known family of decision rules, the error probability is O(log n/n) for many large families, possibly with infinite VC dimension.


Similar articles

On regularization algorithms in learning theory

In this paper we discuss a relation between Learning Theory and Regularization of linear ill-posed inverse problems. It is well known that Tikhonov regularization can be profitably used in the context of supervised learning, where it usually goes under the name of the regularized least-squares algorithm. Moreover, the gradient descent algorithm was recently studied; it is an analog of Landweber r...


Regularization and the small-ball method II: complexity dependent error rates

For a convex class of functions F, a regularization function Ψ(·), and given the random data (X_i, Y_i), i = 1, …, N, we study estimation properties of regularization procedures of the form f̂ ∈ argmin_{f ∈ F} [ (1/N) Σ_{i=1}^{N} (Y_i − f(X_i))² + λΨ(f) ] for some well-chosen regularization parameter λ. We obtain bounds on the L2 estimation error rate that depend on the complexity of the "true model" F* := {f ∈ ...
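A minimal instance of this penalized objective, under illustrative assumptions (F = {x ↦ w·x} and Ψ(f) = w², i.e., scalar ridge regression; the function name is hypothetical), admits a closed-form minimizer:

```python
def ridge_1d(xs, ys, lam):
    """Closed-form minimizer of (1/N) * sum_i (y_i - w * x_i)**2 + lam * w**2.

    Instance of the penalized least-squares objective with F = {x -> w * x}
    and Psi(f) = w**2 (illustrative choices, not the paper's setting).
    """
    n = len(xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    # Setting the derivative to zero: (2/n) * (w * sxx - sxy) + 2 * lam * w = 0
    return (sxy / n) / (sxx / n + lam)
```

With λ = 0 this recovers ordinary least squares; increasing λ shrinks the fitted slope toward zero, which is the trade-off the regularization parameter controls.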


On Different Facets of Regularization Theory

This review provides a comprehensive understanding of regularization theory from different perspectives, emphasizing smoothness and simplicity principles. Using the tools of operator theory and Fourier analysis, it is shown that the solution of the classical Tikhonov regularization problem can be derived from the regularized functional defined by a linear differential (integral) operator in the...


Manifold Based Low-Rank Regularization for Image Restoration and Semi-Supervised Learning

Low-rank structures play an important role in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structure, the concept of the low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear appro...


Generalization Bounds of Regularization Algorithms Derived Simultaneously through Hypothesis Space Complexity, Algorithmic Stability and Data Quality

A main issue in machine learning research is to analyze the generalization performance of a learning machine. Most classical results on the generalization performance of regularization algorithms are derived merely from the complexity of the hypothesis space or the stability property of the learning algorithm. However, in practical applications, the performance of a learning algorithm is not actually...




Journal:
  • IEEE Trans. Information Theory

Volume: 42
Issue:
Pages:
Publication date: 1996